Anytime Query-Tuned Kernel Machine Classifiers Via Cholesky Factorization

Author

  • Dennis DeCoste
Abstract

We recently demonstrated 2- to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste, 2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
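The abstract does not spell out the algorithm itself; as a rough, hedged illustration of the linear-algebra ingredient it names, the NumPy/SciPy sketch below Cholesky-factors a kernel Gram matrix and uses the triangular factor for a query-time evaluation. The RBF kernel, the random data, and all helper names are illustrative assumptions, not the paper's anytime-bound formulation.

```python
# Hedged sketch: Cholesky factorization of a kernel Gram matrix.
# This is NOT DeCoste's anytime-bound algorithm, only an illustration of the
# basic linear-algebra step (kernel matrix -> Cholesky factor -> triangular solves).
import numpy as np
from scipy.linalg import solve_triangular

def rbf_kernel(A, B, gamma=0.5):
    # Gram matrix K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)  (illustrative kernel choice)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))          # stand-in "support vectors"
y = np.sign(rng.standard_normal(50))      # stand-in labels
K = rbf_kernel(X, X) + 1e-8 * np.eye(50)  # small jitter keeps K positive definite

L = np.linalg.cholesky(K)                 # K = L @ L.T

# Solve K alpha = y via two triangular solves (cheaper than a general solve).
alpha = solve_triangular(L.T, solve_triangular(L, y, lower=True), lower=False)

# Query-time output for a new point x_q: f(x_q) = sum_i alpha_i * k(x_i, x_q).
x_q = rng.standard_normal((1, 3))
f_q = rbf_kernel(X, x_q)[:, 0] @ alpha
print(f_q)
```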


Similar resources

Fast Query-Optimized Kernel Machine Classification Via Incremental Approximate Nearest Support Vectors

Support vector machines (and other kernel machines) offer robust modern machine learning methods for nonlinear classification. However, relative to other alternatives (such as linear methods, decision trees, and neural networks), they can be orders of magnitude slower at query time. Unlike existing methods that attempt to speed up query time, such as reduced set compression (e.g., Burges, 1996), a...

Full text

A regularized kernel CCA contrast function for ICA

A new kernel-based contrast function for independent component analysis (ICA) is proposed. This criterion corresponds to a regularized correlation measure in high-dimensional feature spaces induced by kernels. The formulation is a multivariate extension of the least squares support vector machine (LS-SVM) formulation to kernel canonical correlation analysis (CCA). The regularization is incorpor...

Full text

Implementing a parallel matrix factorization library on the cell broadband engine

Matrix factorization (often called decomposition) is a frequently used kernel in a large number of applications, ranging from linear solvers to data clustering and machine learning. The central contribution of this paper is a thorough performance study of four popular matrix factorization techniques, namely LU, Cholesky, QR, and SVD, on the STI Cell broadband engine. The paper explores algori... (an illustrative sketch of the four factorizations appears after this list)

Full text

Backstepping design with local optimality matching

In this study of nonlinear H∞-optimal control design for strict-feedback nonlinear systems, our objective is to construct globally stabilizing control laws that match the optimal control law up to any desired order and that are inverse optimal with respect to some computable cost functional. Our recursive construction of a cost functional and the corresponding solution to the Hamilton-Jacobi-Isa...

Full text

High Performance Cholesky Factorization via Blocking and Recursion That Uses Minimal Storage

We present a high performance Cholesky factorization algorithm, called BPC for Blocked Packed Cholesky, which performs better than or equivalent to the LAPACK DPOTRF subroutine, but with about the same memory requirements as the LAPACK DPPTRF subroutine, which runs at level 2 BLAS speed. Algorithm BPC only calls DGEMM and level 3 kernel routines. It combines a recursive algorithm with blocking and ... (a sketch of a blocked Cholesky appears after this list)

Full text
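For readers unfamiliar with the four factorizations compared in the Cell broadband engine study listed above, the hedged sketch below computes LU, Cholesky, QR, and SVD for a small matrix with off-the-shelf NumPy/SciPy routines; it says nothing about the Cell-specific implementations the paper benchmarks.

```python
# Hedged sketch: the four factorizations compared in the study, computed with
# standard NumPy/SciPy routines (no relation to the Cell BE implementations).
import numpy as np
from scipy.linalg import lu, cholesky, qr

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
S = A @ A.T + 6 * np.eye(6)   # symmetric positive definite, for Cholesky

P, L, U = lu(A)               # LU with partial pivoting: A = P @ L @ U
C = cholesky(S, lower=True)   # Cholesky: S = C @ C.T
Q, R = qr(A)                  # QR: A = Q @ R
Uv, s, Vt = np.linalg.svd(A)  # SVD: A = Uv @ diag(s) @ Vt

for name, err in [("LU", np.abs(P @ L @ U - A).max()),
                  ("Cholesky", np.abs(C @ C.T - S).max()),
                  ("QR", np.abs(Q @ R - A).max()),
                  ("SVD", np.abs((Uv * s) @ Vt - A).max())]:
    print(name, err)  # reconstruction errors should all be near machine precision
```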
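The BPC abstract above only sketches the idea of combining recursion and blocking with level-3 kernels. The hedged code below shows a textbook right-looking blocked Cholesky in NumPy, where the trailing-submatrix update is a single matrix-matrix product (the role DGEMM/DSYRK play in BPC). The block size and dense storage layout here are illustrative assumptions, not the BPC packed format.

```python
# Hedged sketch: a textbook blocked (right-looking) Cholesky factorization.
# The trailing-submatrix update is one matrix-matrix product, i.e. the
# level-3 BLAS (DGEMM-like) work that dominates; this is NOT the BPC packed
# storage scheme described in the paper.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def blocked_cholesky(A, nb=64):
    """Return lower-triangular L with A = L @ L.T (A symmetric positive definite)."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # Factor the diagonal block.
        A[k:e, k:e] = cholesky(A[k:e, k:e], lower=True)
        if e < n:
            # Panel solve: L21 = A21 @ inv(L11).T  (DTRSM-like)
            A[e:, k:e] = solve_triangular(A[k:e, k:e], A[e:, k:e].T, lower=True).T
            # Trailing update: A22 <- A22 - L21 @ L21.T  (DGEMM/DSYRK-like)
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T
    return np.tril(A)

# Quick check against the reference factorization.
rng = np.random.default_rng(2)
M = rng.standard_normal((200, 200))
S = M @ M.T + 200 * np.eye(200)
L = blocked_cholesky(S, nb=32)
print(np.abs(L @ L.T - S).max())
```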


Journal:

Volume   Issue

Pages  -

Publication date: 2003